50 Minutes | Activities + Career Exploration + Exam Prep
Quick Recap — what do we remember from Week 1? (5 min)
Activity 1: Break the AI — test a real LLM and document what you find (18 min)
Activity 2: AI Career Explorer — what jobs exist in AI and what skills do they need? (15 min)
Exam Prep — what to expect, what to focus on (10 min)
Debrief + Questions (2 min)
Without looking at notes — who can explain what a hallucination is and why it happens?
Token — unit of text an LLM reads (~¾ word; quick demo below)
Training — data → learn patterns → fine-tune
Context window — short-term memory, resets each session
Hallucination — confident but false output, not lying
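Optional live demo for the token recap: the ~¾-word figure can be shown directly with OpenAI's open-source tiktoken tokenizer. A minimal sketch, assuming tiktoken is installed (other models use different tokenizers, so exact counts vary):

```python
# Minimal sketch: counting tokens with the open-source tiktoken library.
# Assumes `pip install tiktoken`; other models tokenize differently.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/4-era models

text = "A hallucination is confident but false output."
tokens = enc.encode(text)

print(len(text.split()), "words ->", len(tokens), "tokens")
for t in tokens:
    # Decode each token id back to its text piece: common words are
    # usually one token, while rarer words split into sub-word pieces.
    print(t, repr(enc.decode([t])))
```

Common words usually cost one token each while rarer words split into several pieces, which is the concrete reason a token is not the same thing as a word.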
Use a real AI tool (ChatGPT, Claude, Gemini — whichever you have access to) and try to trigger the limitations we covered last week. Document everything.
Tools you can use: ChatGPT (chat.openai.com), Claude (claude.ai), Gemini (gemini.google.com)
You need a phone or laptop. Work in pairs if needed.
You have 18 minutes. Fill in the worksheet on the next slide as you go.
Try each challenge. Write what happened — be specific.
Trigger a hallucination
Ask about a very specific obscure fact, a made-up person, or a recent event. Did it make something up confidently?
Test the knowledge cutoff
Ask about something that happened recently. What does it say? Does it admit it doesn't know or does it guess?
Test an ambiguous prompt
Ask something vague like "Tell me about the big event." What assumptions does the AI make?
Bonus: Find something impressive
What does the AI do really well? Note one thing that genuinely surprised you.
What was the most interesting thing you discovered? Did you successfully trigger a hallucination? What did it say?
Notice how the AI sounds equally confident whether it's right or wrong. There's no "I'm not sure" signal built into how it generates text. That's what makes hallucinations dangerous in high-stakes contexts.
Real-world implication: If you're using AI for research, medical questions, legal questions, or news — you must verify what it tells you. Always.
AI isn't just a subject — it's reshaping almost every career field. Whether you go into medicine, law, design, business, or engineering, you will work alongside AI.
The goal of this activity isn't to push you into tech. It's to help you understand what skills are becoming valuable — and which ones AI can't replace.
Let's look at the landscape of AI-related jobs and what they actually require.
Machine Learning Engineer — Builds and trains AI models. Heavy coding (Python), maths (statistics, linear algebra).
Avg salary: $120,000–$200,000+ USD/year
Prompt Engineer — Designs the instructions that make AI behave correctly. Needs clear writing + understanding of LLM behaviour. No CS degree required.
Avg salary: $80,000–$130,000 USD/year
AI Ethics & Policy Specialist — Ensures AI systems are fair, safe, and legal. Needs law, philosophy, or social science background.
Avg salary: $70,000–$120,000 USD/year
Data Scientist — Prepares and analyses training data and turns model outputs into decisions. Needs stats, SQL, Python.
Avg salary: $75,000–$130,000 USD/year
AI UX Designer — Designs interfaces for AI products — how the human and AI interact. Needs design thinking + understanding of AI limitations.
Avg salary: $85,000–$140,000 USD/year
The AI-Literate Professional — Doctors, lawyers, journalists, teachers: every profession now needs people who can critically evaluate and use AI tools. Domain expertise + AI literacy = very valuable.
Avg salary: a premium on top of the going rate in any field
Technical Skills
Python programming
Statistics & probability
Data analysis (SQL, Excel)
Understanding of ML concepts
Human Skills AI Can't Replace
Critical thinking & fact-checking
Ethical judgment & values
Clear communication & writing
Creativity & original ideas
The most valuable people in AI combine domain expertise with the ability to critically evaluate what AI produces.
Answer these 3 questions honestly. There's no right answer — the point is to think about your own future.
What field or career are you currently interested in? (Any field — doesn't have to be tech)
How do you think AI will change that field in the next 10 years? Name one specific change you can imagine.
What is one skill — technical or human — you want to develop that would make you more valuable in an AI-influenced world?
Search for one real job listing in your field that mentions AI. What does it ask for?
What field are you interested in? What change do you expect AI to bring to it?
Key takeaway: You don't have to become a programmer to benefit from understanding AI. The people who will thrive are those who understand what AI can and can't do — and use that knowledge in their own field.
The jobs AI is replacing fastest are the ones built on routine, repetitive tasks. The jobs that are growing require human judgment, creativity, and the ability to work critically with AI tools.
Format: 15 multiple-choice questions
Covers: Everything from Weeks 1 & 2
What it tests: Understanding of concepts, not just vocabulary
You won't just be asked "what is a token?" You'll be asked to apply the concept — like "what happens when a context window is exceeded during a long tutoring session?"
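To make that exam example concrete, here is a toy sketch in plain Python (a made-up 20-token budget, counting whitespace-separated words as tokens for simplicity) of how a chat system drops the oldest messages once the window fills up:

```python
# Toy sketch of a context window with a hypothetical 20-"token" budget.
# Real systems count model tokens; words are used here for simplicity.
CONTEXT_LIMIT = 20

def fit_to_window(messages, limit=CONTEXT_LIMIT):
    """Keep only the most recent messages that fit in the budget."""
    kept, used = [], 0
    for msg in reversed(messages):   # walk from newest to oldest
        cost = len(msg.split())
        if used + cost > limit:
            break                    # everything older gets dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))      # restore chronological order

session = [
    "Student: Hi, my name is Amina.",   # introduced at the start...
    "AI: Nice to meet you, Amina!",
    "Student: Can you explain photosynthesis step by step?",
    "AI: Plants absorb light energy and convert carbon dioxide and water into glucose.",
    "Student: What's my name?",
]

print(fit_to_window(session))
# The earliest messages no longer fit, so "Amina" is gone: the model
# answers the last question without ever "seeing" the introduction.
```

This is exactly the tutoring-session failure: nothing is "forgotten" in a human sense; the early turns simply no longer fit in the window the model is shown.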
How LLMs Work
✓ What a token is and why it matters
✓ The 3-step training process
✓ What fine-tuning does
✓ What a context window is + what happens when exceeded
Why They Fail
✓ What hallucinations are + 3 causes
✓ Real-world examples of hallucinations
✓ Knowledge cutoff + no real-time info
✓ Bias in training data + why it happens
Hallucination ≠ lying
The model has no concept of truth. It predicts plausible text. That's it.
Pattern recognition ≠ understanding
It can write a poem about grief without experiencing grief. It doesn't know what grief means.
Context window ≠ long-term memory
The app might save your history. The model itself starts fresh every session (see the sketch after this list).
More data ≠ no hallucinations
Ambiguous prompts and data gaps still cause hallucinations regardless of model size.
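The "context window ≠ long-term memory" distinction is easy to demonstrate in code. A minimal sketch, where send_to_model is a hypothetical stand-in for any real LLM API call: the app keeps the history and must resend all of it on every turn, because the model itself is stateless.

```python
# Sketch of why "memory" lives in the app, not the model.
# send_to_model is a hypothetical stand-in for a real LLM API call;
# the key point is that it receives the FULL history every time.
def send_to_model(messages):
    # The model only "knows" whatever is inside `messages` right now.
    return f"(reply based on {len(messages)} messages)"

history = []  # the app's saved history, not the model's memory

def chat(user_text):
    history.append({"role": "user", "content": user_text})
    reply = send_to_model(history)   # resend everything, every turn
    history.append({"role": "assistant", "content": reply})
    return reply

chat("My name is Amina.")
print(chat("What's my name?"))  # answerable only because the app replayed turn 1
# Start a "new session" (empty history) and the model has no trace of the name.
```

When a chat app seems to "remember" you, it is this replay loop doing the work, not the model.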
Q1: A student asks an AI for 5 sources on climate change. The AI provides 5 citations that look real but don't exist. Which limitation caused this?
→ Hallucination — pattern matching without understanding (generates plausible-looking citations)
Q2: A chatbot forgets the student's name halfway through a tutoring session even though they introduced themselves at the start. Why?
→ Context window exceeded — the early part of the conversation was dropped
Q3: An LLM gives better legal advice in English than in Arabic. What is the most likely reason?
→ Bias from training data — far more English legal content exists in the training data
The Theory (Week 1)
Tokens — unit of text (~¾ word)
Training — data → patterns → fine-tune
Context window — short-term memory with hard limit
Hallucinations — 3 causes, not lying
Bias, cutoff, no understanding
The Practice (Week 2)
Tested a real LLM and documented its failures
Identified how to trigger hallucinations
Explored AI career paths and required skills
Reflected on how AI will affect your own future field
Study your handout. Review the key distinctions. Get some sleep.
The exam tests understanding — not just memory.